Hello and welcome everybody to the next lecture of Architectures of Supercomputers.
Today we will be talking about caches and nothing else.
Last week I said that we have basically three ways of structuring caches, which are known
as direct mapping, fully associative mapping, and n-way associative or set associative mapping.
And I also said last week that the n-way set associative mapping is really the general way
to structure a cache, and the former two are just special cases of it.
And this week we will try to figure out why that is.
Before we explain the third way of structuring a cache, we will start with the former two
because they are simpler to understand.
Another way to express n-way set associative mapping is as a mix of the former two, direct
mapping and fully associative mapping.
So what does direct mapping mean?
Basically, I have a lot of memory, and I subdivide that memory into blocks of the size of
what I call cache lines.
So this is memory and here is my cache.
My cache is also structured into cache lines of the same size.
So I have to figure out which block of memory may be placed into which cache line.
The reason why we would like to keep this placement simple is that searching in the cache
might be expensive.
So maybe a simple way of putting cache lines into the cache is sufficient.
And the simplest way we can do this is direct mapping, where we simply take the
main memory address and compute it modulo the size of the cache.
Modulo does not sound like a cheap function because it basically involves a division.
And we learned in the last weeks that divisions are always slow.
But usually the size of the cache is a power of two, and a modulo by a power of two in
a binary coding system is always cheap, because it just means keeping a few bits of the
address and throwing the rest away, and there is nothing that gets cheaper than throwing
away bits.
So what I do is I just look at the main memory address, I chop off some bits, and then I get
the cache line where my data is to be placed.
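To make the bit-chopping concrete, here is a minimal sketch in C of how a direct-mapped index could be computed. The 64-byte line size and the 1024-line cache are assumed example values, not numbers from the lecture.

```c
#include <stdint.h>
#include <stdio.h>

/* Example geometry (assumed values for illustration only):
 * 64-byte cache lines  -> 6 offset bits
 * 1024 cache lines     -> 10 index bits
 */
#define LINE_SIZE 64u
#define NUM_LINES 1024u

int main(void) {
    uint32_t address = 0x12345678u;

    /* Byte offset inside the cache line: the lowest bits. */
    uint32_t offset = address % LINE_SIZE;   /* same as address & (LINE_SIZE - 1) */

    /* Direct-mapped index: block number modulo the number of lines.
     * Because both values are powers of two, the modulo is just
     * selecting a few bits -- no real division is needed. */
    uint32_t block  = address / LINE_SIZE;   /* same as address >> 6 */
    uint32_t index  = block % NUM_LINES;     /* same as block & (NUM_LINES - 1) */

    /* The remaining upper bits form the tag, which the cache stores so it
     * can later check whether the line really holds this address. */
    uint32_t tag    = block / NUM_LINES;     /* same as block >> 10 */

    printf("offset=%u index=%u tag=0x%x\n", offset, index, tag);
    return 0;
}
```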
Right now we're just trying to preview what each strategy does.
We will take a more in-depth look on the next slides.
Okay.
So this is basically an illustration that I have just drawn here.
What's important here is that if you go through main memory, you will always see the same
repeating pattern of cache lines to which the blocks in main memory map.
So this block here will map to the same cache line as this one, and so on.
So the mapping repeats, which maybe sounds bad because it is so rigid that blocks mapping
to the same cache line will always evict each other, and you would assume that there will
be many cache collisions.
But in reality it works surprisingly well.
Okay.
So in a fully associative cache, we don't have to compute this modulo calculation,
because each block may be placed anywhere within the cache.
As I said before, in this case, when I'm trying to look up data in the cache, I have to search
all cache lines, and this may be expensive.
So I need a lot of hardware, and it might take a lot of time.
Now somehow my PowerPoint is really slow.
Okay, so as I said before, pretty much any block can go into any cache line.
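As a rough sketch of what that search amounts to, here is a small software model of a fully associative lookup, again with assumed sizes; real hardware would compare all tags in parallel rather than in a loop, which is exactly the extra hardware mentioned above.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64u     /* assumed line size */
#define NUM_LINES 1024u   /* assumed cache size, in lines */

struct cache_line {
    bool     valid;
    uint32_t tag;         /* here simply the block number */
};

static struct cache_line cache[NUM_LINES];

/* Fully associative lookup: the block may sit in any line,
 * so in the worst case every single tag has to be compared. */
static bool lookup_fully_associative(uint32_t address) {
    uint32_t block = address / LINE_SIZE;
    for (uint32_t i = 0; i < NUM_LINES; i++) {
        if (cache[i].valid && cache[i].tag == block)
            return true;   /* hit */
    }
    return false;          /* miss */
}

int main(void) {
    cache[3].valid = true;
    cache[3].tag   = 0x1000u / LINE_SIZE;   /* pretend this block was loaded */
    printf("hit: %d\n", lookup_fully_associative(0x1000u));
    printf("hit: %d\n", lookup_fully_associative(0x2000u));
    return 0;
}
```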
So the n-way set associative mapping is a combination of the previous two strategies.
So what I do is I don't take my whole cache and let every block from my memory sit anywhere,
but I subdivide the cache into sets, and the mapping now consists of two steps: a direct
mapping that selects the set, and a fully associative placement within that set.
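Putting the two steps together, a simple software model of an n-way set associative lookup might look like the sketch below; the 4 ways and 256 sets are assumed numbers chosen only for illustration.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define LINE_SIZE 64u    /* assumed line size */
#define NUM_WAYS  4u     /* the "n" in n-way, assumed */
#define NUM_SETS  256u   /* assumed number of sets */

struct cache_line {
    bool     valid;
    uint32_t tag;
};

/* One set holds NUM_WAYS lines; the whole cache is an array of sets. */
static struct cache_line cache[NUM_SETS][NUM_WAYS];

/* Step 1: direct mapping picks the set (a cheap modulo by a power of two).
 * Step 2: a fully associative search, but only within that one small set. */
static bool lookup_set_associative(uint32_t address) {
    uint32_t block = address / LINE_SIZE;
    uint32_t set   = block % NUM_SETS;   /* same as block & (NUM_SETS - 1) */
    uint32_t tag   = block / NUM_SETS;

    for (uint32_t way = 0; way < NUM_WAYS; way++) {
        if (cache[set][way].valid && cache[set][way].tag == tag)
            return true;   /* hit */
    }
    return false;          /* miss */
}

int main(void) {
    /* Pretend the block containing address 0x1000 was loaded into way 2
     * of its set, then look it up. */
    uint32_t block = 0x1000u / LINE_SIZE;
    cache[block % NUM_SETS][2].valid = true;
    cache[block % NUM_SETS][2].tag   = block / NUM_SETS;
    printf("hit: %d\n", lookup_set_associative(0x1000u));
    return 0;
}
```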